

Structure-Preserving Multi-View Embedding Using Gromov-Wasserstein Optimal Transport

Eufrazio, Rafael Pereira, Montesuma, Eduardo Fernandes, Cavalcante, Charles Casimiro

arXiv.org Machine Learning

Multi-view data analysis seeks to integrate multiple representations of the same samples in order to recover a coherent low-dimensional structure. Classical approaches often rely on feature concatenation or explicit alignment assumptions, which become restrictive under heterogeneous geometries or nonlinear distortions. In this work, we propose two geometry-aware multi-view embedding strategies grounded in Gromov-Wasserstein (GW) optimal transport. The first, termed Mean-GWMDS, aggregates view-specific relational information by averaging distance matrices and applying GW-based multidimensional scaling to obtain a representative embedding. The second strategy, referred to as Multi-GWMDS, adopts a selection-based paradigm in which multiple geometry-consistent candidate embeddings are generated via GW-based alignment and a representative embedding is selected. Experiments on synthetic manifolds and real-world datasets show that the proposed methods effectively preserve intrinsic relational structure across views. These results highlight GW-based approaches as a flexible and principled framework for multi-view representation learning.
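To make the aggregation idea concrete, below is a minimal sketch of the Mean-GWMDS strategy as described in the abstract: average view-specific distance matrices, then embed the mean geometry. Plain metric MDS (scikit-learn) stands in for the paper's GW-based scaling step, and the function name and normalization choice are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of the Mean-GWMDS aggregation idea (not the authors'
# code): average per-view distance matrices, then embed the mean geometry.
# Classical metric MDS stands in for the paper's GW-based scaling step.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def mean_view_embedding(views, n_components=2, random_state=0):
    """views: list of (n_samples, d_v) arrays over the same samples."""
    # Per-view Euclidean distance matrices, rescaled so views with
    # heterogeneous scales contribute comparably to the average.
    dists = [squareform(pdist(X)) for X in views]
    dists = [D / D.max() for D in dists]
    D_mean = np.mean(dists, axis=0)  # aggregated relational structure
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=random_state)
    return mds.fit_transform(D_mean)

# Two views of the same samples: a noisy copy and a nonlinear expansion.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
views = [X + 0.05 * rng.standard_normal(X.shape), np.hstack([X, X**2])]
Z = mean_view_embedding(views)
print(Z.shape)  # (100, 2)
```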


e464656edca5e58850f8cec98cbb979b-Supplemental.pdf

Neural Information Processing Systems

To be consistent with the accuracy definition, we denote the correctness of $s_t^j$ for instance $t$ as $\mathrm{sim}(s_t^j, r_t) = (\sqrt{2} - \mathrm{distance}(s_t^j, r_t))/\sqrt{2}$, where $\mathrm{sim}(s_t^j, r_t)$ lies in the range $[0, 1]$ and $\mathrm{distance}(s_t^j, r_t)$ lies in the range $[0, \sqrt{2}]$; $\sqrt{2}$ is the largest Euclidean distance in the probability simplex. Given a test dataset $I$ with $n$ instances, the correctness of a learner $SL_j$ on $I$ can be denoted as $\mathrm{corr}_{SL_j} = \frac{1}{n}\sum_{t=1}^{n} \mathrm{sim}(s_t^j, r_t)$. In this section, we define multiple metrics for consistency, accuracy, and correct-consistency in detail. Figure 1 shows the metrics computation in our experiments. We have created a git repository for this work, which will be posted upon the acceptance and publication of this work.
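A direct transcription of the correctness metric above, assuming predictions and references are probability vectors; the helper names `sim` and `correctness` are mine, not from the paper.

```python
# Direct transcription of the correctness metric defined above; helper
# names are illustrative. Inputs are probability vectors on the simplex.
import numpy as np

SQRT2 = np.sqrt(2)  # largest Euclidean distance in the probability simplex

def sim(s, r):
    """sim(s, r) = (sqrt(2) - ||s - r||_2) / sqrt(2), in [0, 1]."""
    return (SQRT2 - np.linalg.norm(s - r)) / SQRT2

def correctness(S, R):
    """corr_{SL_j}: mean similarity of learner outputs S to references R."""
    return float(np.mean([sim(s, r) for s, r in zip(S, R)]))

# Example with 3-class probability vectors.
S = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
R = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(correctness(S, R))  # higher for predictions closer to the reference
```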


Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

Neural Information Processing Systems

Most work, however, focuses on either node- or graph-level supervised learning, such as node, link, or graph classification, or on node-level unsupervised learning (e.g., node clustering). Despite its wide range of possible applications, graph-level unsupervised representation learning has not received much attention yet. This might be mainly attributed to the high representation complexity of graphs, which can be represented by n! equivalent adjacency matrices, where n is the number of nodes. In this work we address this issue by proposing a permutation-invariant variational autoencoder for graph-structured data.
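The n! figure refers to node relabelings: permuting a graph's node order produces a different adjacency matrix for the same graph, via A' = P A P^T for a permutation matrix P. A small numpy illustration (my example, not from the paper):

```python
# Illustration of the n! equivalence noted above: relabeling nodes gives a
# different adjacency matrix for the same graph (A' = P A P^T).
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])  # path graph 0-1-2

perm = [2, 0, 1]                    # one of n! = 6 node relabelings
P = np.eye(3, dtype=int)[perm]      # permutation matrix
A_perm = P @ A @ P.T                # adjacency of the relabeled graph

print(not np.array_equal(A, A_perm))               # True: different matrix...
print(sorted(A.sum(0)) == sorted(A_perm.sum(0)))   # True: ...same structure
```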


Don't take it lightly: Phasing optical random projections with unknown operators

Sidharth Gupta, Remi Gribonval, Laurent Daudet, Ivan Dokmanić

Neural Information Processing Systems

In this paper we tackle the problem of recovering the phase of complex linear measurements when only magnitude information is available and we control the input. We are motivated by the recent development of dedicated optics-based hardware for rapid random projections, which leverages the propagation of light in random media.
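As a toy illustration of the measurement setting being described (my sketch, not the authors' hardware or recovery algorithm), a random complex matrix plays the role of the unknown scattering operator, and only the magnitudes of its output are observed:

```python
# Toy version of the measurement model: a random complex operator A applied
# to an input x we control, where only magnitudes |Ax| are observed and the
# phase of the measurements is lost. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 64                                   # input size, # projections
A = (rng.standard_normal((m, n)) +              # unknown random operator
     1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n)                      # controlled input
y = np.abs(A @ x)                               # magnitude-only measurements
print(y.shape)                                  # phases of A @ x are unknown
```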